OpenAI completion
First class Sublime Text AI assistant with o3, o4-mini, gpt-4.1 and ollama support!
Installs: 7K total (Windows 3K, Mac 3K, Linux 1K)
OpenAI Sublime Text Plugin
tldr;
Cursor level of AI assistance for Sublime Text. I mean it.
Works with any OpenAI-like API: llama.cpp server, ollama, or any other third-party LLM hosting. Claude API support is coming soon.
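Concretely, an "OpenAI-like API" here means a server that accepts the standard chat-completions request shape. Below is a minimal sketch of such a request body; the URL and model name are placeholders for illustration, not plugin defaults:

```python
import json

# Sketch of the OpenAI-style chat-completions request such servers accept.
# The URL and model name are placeholders -- substitute your own endpoint/model.
url = "http://localhost:8080/v1/chat/completions"  # e.g. a local llama.cpp server
payload = {
    "model": "gpt-4.1",      # or any model your server exposes
    "stream": True,          # the plugin streams responses via SSE
    "messages": [
        {"role": "system", "content": "You are a coding assistant."},
        {"role": "user", "content": "Explain this selection."},
    ],
}
body = json.dumps(payload)  # this JSON is POSTed to `url`
```

Any server that understands this shape (llama.cpp server, ollama, and most hosted providers) can be pointed at from the plugin settings.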
Features
- Chat mode powered by whatever model you'd like.
- o4-mini and o3 support.
- gpt-4.1 support.
- Compatible with llama.cpp server, ollama, and any other OpenAI-like API.
- Dedicated chat histories and assistant settings per project.
- Ability to send whole files or parts of them as expanded context.
- Phantoms: get non-disruptive answers from the model inline, right in the view.
- Markdown syntax with code-language syntax highlighting (Chat mode only).
- Server-Sent Events (SSE) streaming support.
- Status bar info: model name, mode, sent/received tokens.
- Proxy support.
Requirements
- Sublime Text 4
- llama.cpp or ollama installed, OR
- A remote LLM service provider API key, e.g. OpenAI
- Anthropic API key (coming soon)
Installation
Via Package Control
- Install the Sublime Text Package Control plugin if you haven't done so already.
- Open the command palette and type `Package Control: Install Package`.
- Type `OpenAI` and press `Enter`.
Via Git Clone
- Go to your packages folder: `Preferences: Browse Packages`.
- Run `git clone https://github.com/yaroslavyaroslav/OpenAI-sublime-text.git OpenAI\ completion` in the folder that Sublime opened.
- Open Sublime Text and let it install the dependencies.
- It may ask you to restart Sublime; do so if it does.
- Open Sublime again, type `OpenAI`, and press `Enter`.
[!NOTE] Highly recommended complementary packages:
- https://github.com/SublimeText-Markdown/MarkdownCodeExporter
- https://sublimetext-markdown.github.io/MarkdownEditing
Usage
AI Assistance use case
ChatGPT mode works the following way:
- Select some text, or even whole tabs, to include them in the request.
- Run either the `OpenAI: Chat Model Select` or the `OpenAI: Chat Model Select With Tabs` command.
- Input a request in the input window, if any.
- The model prints its response in the output panel by default, but you can switch to a separate tab with `OpenAI: Open in Tab`.
- To get an existing chat in a new window, run `OpenAI: Refresh Chat`.
- To reset the history, the `OpenAI: Reset Chat History` command comes to the rescue.
[!NOTE] For convenience, it's suggested to bind at least `OpenAI: New Message`, `OpenAI: Chat Model Select`, and `OpenAI: Show output panel`; you can do that in the plugin settings.
Chat history management
You can separate a chat history and assistant settings for a given project by appending the following snippet to its settings:
```json
{
    "settings": {
        "ai_assistant": {
            "cache_prefix": "/absolute/path/to/project/"
        }
    }
}
```
Additional request context management
You can add a few things to your request:
- a multi-line selection within a single file
- multiple files within a single View Group

For the former, just select something within the active view and initiate the request without switching to another tab; the selection will be added to the request as a preceding message (each selection chunk separated by a newline).

To append whole file(s) to the request, `super+click` their tabs so they all become visible in a single view group, then run the `OpenAI: Add Sheets to Context` command. Sheets can be deselected with the same command.

You can check the number of added sheets in the status bar and in the preview section when calling the `OpenAI: Chat Model Select` command.
Image handling
Image handling can be invoked with the `OpenAI: Handle Image` command.
It expects an absolute path to an image to be selected in a buffer or stored in the clipboard when the command is called (something like `/Users/username/Documents/Project/image.png`). In addition, a prompt can be passed via the input panel to process the image with special treatment. Only `png` and `jpg` images are supported.
[!NOTE] Currently the plugin expects a link, or a list of links separated by newlines, to be selected in the buffer or stored in the clipboard.
In-buffer llm use case
Phantom use case
Phantom is an overlay UI placed inline in the editor view (see the picture below). It doesn't affect the content of the view.
- [optional] Select some text to pass as context to manipulate with.
- Pick `Phantom` as the output mode in the `OpenAI: Chat Model Select` quick panel.
- You can apply actions to the llm prompt; they're quite self-descriptive and follow the behavior of the deprecated in-buffer commands.
- You can hit `ctrl+c` to stop prompting, same as in `panel` mode.
Other features
Open Source models support (llama.cpp, ollama)
- Replace the `"url"` setting of a given model to point to whatever host your server is running on (e.g. `http://localhost:8080/v1/chat/completions`).
- Provide a `"token"` if your provider requires one.
- Tweak `"chat_model"` to a model of your choice and you're set.
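Putting the steps above together, an assistant entry for a local ollama server might look like the sketch below. The field names follow the settings mentioned above, but treat the exact schema as an assumption and check the plugin's default settings file:

```json
// Hypothetical assistant entry for a local ollama server -- verify the
// field names against the plugin's shipped default settings.
{
    "name": "Local Llama",
    "url": "http://localhost:11434/v1/chat/completions",
    "token": "",                 // most local servers don't need one
    "chat_model": "llama3.1"     // any model you've pulled with `ollama pull`
}
```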
Google Gemini models
- Replace the `"url"` setting of a given model to point to the Google Gemini OpenAI-compatible API: `https://generativelanguage.googleapis.com/v1beta/openai/chat/completions`.
- Provide a `"token"` if your provider requires one.
- Tweak `"chat_model"` to a model from the list of supported models.
You can read more about OpenAI compatibility in the Gemini documentation.
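As a sketch, a Gemini-backed assistant entry might look like this; the URL is the one given above, while the other field names and the model name are illustrative assumptions to verify against the plugin's default settings:

```json
// Hypothetical assistant entry for Gemini's OpenAI-compatible endpoint.
{
    "name": "Gemini",
    "url": "https://generativelanguage.googleapis.com/v1beta/openai/chat/completions",
    "token": "YOUR_GEMINI_API_KEY",
    "chat_model": "gemini-1.5-flash"  // example; pick any supported model
}
```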
[!NOTE] You can set both `url` and `token` either globally or per assistant instance, so you can freely switch between closed-source and open-source models within a single session.
Settings
The OpenAI Completion plugin has a settings file where you can set your OpenAI API key. This is required for most providers to work. To set your API key, open the settings via `Preferences` -> `Package Settings` -> `OpenAI` -> `Settings` and paste your API key into the token property, as follows:
```json
{
    "token": "sk-your-token"
}
```
Advertisement disabling
To disable advertisements, add an `"advertisement": false` line to the settings of every assistant where you want them disabled.
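For instance, a per-assistant entry with advertisements disabled might look like the sketch below; the other fields are illustrative, not the full schema:

```json
// Hypothetical assistant entry -- only "advertisement" is the point here.
{
    "name": "General Assistant",
    "chat_model": "gpt-4.1",
    "advertisement": false
}
```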
Key bindings
You can bind keys for a given plugin command in `Preferences` -> `Package Settings` -> `OpenAI` -> `Key Bindings`. For example, you can bind the "New Message" command, including active tabs as context, like this:
```json
{
    "keys": [ "super+k", "super+'" ],
    "command": "openai", // or "openai_panel"
    "args": { "files_included": true }
},
```
Proxy support
You can set it up by overriding the proxy property in the `OpenAI completion` settings, like so:
```json
"proxy": {
    "address": "127.0.0.1", // required
    "port": 9898, // required
    "username": "account",
    "password": "sOmEpAsSwOrD"
}
```
Disclaimers
[!WARNING] All selected code will be sent to the OpenAI servers (if not using custom API provider) for processing, so make sure you have all necessary permissions to do so.
[!NOTE] Dedicated to GPT-3.5, the one that initially wrote about 80% of this back then. It felt like pure magic!